Adversarial scratches: Deployable attacks to CNN classifiers
Authors
Abstract
A growing body of work has shown that deep neural networks are susceptible to adversarial examples. These take the form of small perturbations applied to the model's input which lead to incorrect predictions. Unfortunately, most of the literature focuses on visually imperceivable perturbations applied to digital images, which often are, by design, impossible to deploy against physical targets. We present Adversarial Scratches: a novel L0 black-box attack which takes the form of scratches in images, and which possesses much greater deployability than other state-of-the-art attacks. Adversarial Scratches leverage Bézier curves to reduce the dimension of the search space and possibly constrain the attack to a specific location. We test Adversarial Scratches in several scenarios, including a publicly available API and images of traffic signs. Results show that our attack often achieves a higher fooling rate than other deployable state-of-the-art methods, while requiring significantly fewer queries and modifying very few pixels.
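The core idea of the parameterization — rendering a sparse perturbation as a Bézier curve, so a black-box optimizer only searches over a few control-point coordinates and a colour instead of per-pixel values — can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function names and the nearest-pixel rasterization are ours.

```python
import numpy as np
from math import comb

def bezier_points(control_pts, n=100):
    """Evaluate a Bézier curve of arbitrary degree at n parameter values."""
    control_pts = np.asarray(control_pts, dtype=float)
    d = len(control_pts) - 1           # degree: 3 control points -> quadratic, etc.
    t = np.linspace(0.0, 1.0, n)
    curve = np.zeros((n, 2))
    for i, p in enumerate(control_pts):
        # Bernstein basis polynomial weighting the i-th control point
        bern = comb(d, i) * (t ** i) * ((1.0 - t) ** (d - i))
        curve += bern[:, None] * p
    return curve

def apply_scratch(image, control_pts, rgb):
    """Paint a scratch (the pixels along a Bézier curve) onto a copy of the image."""
    out = image.copy()
    h, w = out.shape[:2]
    for x, y in bezier_points(control_pts):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < h and 0 <= xi < w:
            out[yi, xi] = rgb          # only pixels on the curve change (L0 sparsity)
    return out
```

An attacker would then optimize the control points and the RGB value (for instance with an evolutionary strategy), querying the classifier only on the rendered images — the black-box setting described in the abstract.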
Similar articles
Are Generative Classifiers More Robust to Adversarial Attacks?
There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which only model the conditional distribution of the labels given the inputs. In this abstract we propose a deep Bayes classifier that improves the c...
Sparsity-based Defense against Adversarial Attacks on Linear Classifiers
Deep neural networks represent the state of the art in machine learning in a growing number of fields, including vision, speech and natural language processing. However, recent work raises important questions about the robustness of such architectures, by showing that it is possible to induce classification errors through tiny, almost imperceptible, perturbations. Vulnerability to such “adversa...
Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks
Pattern recognition systems have been widely used in adversarial classification tasks like spam filtering and intrusion detection in computer networks. In these applications a malicious adversary may successfully mislead a classifier by “poisoning” its training data with carefully designed attacks. Bagging is a well-known ensemble construction method, where each classifier in the ensemble is tr...
Defense-GAN: Protecting Classifiers against Adversarial Attacks Using Generative Models
In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend de...
Deployable Classifiers for Malware Detection
The application of machine learning methods to malware detection has opened up possibilities of generating a large number of classifiers that use different kinds of features and learning algorithms. A straightforward way to select the best classifier is to pick the one with the best holdout or cross-validation performance. Cross-validation or holdout gives a point estimate of generalization performan...
Journal
Journal title: Pattern Recognition
Year: 2023
ISSN: 1873-5142, 0031-3203
DOI: https://doi.org/10.1016/j.patcog.2022.108985